Attempting Wan 2.2 I2V (image-to-video) generation on Windows with an RTX 4060 (8 GB VRAM). The 5B fp8 model produced rough results; the 14B Rapid distilled model combined with lowvram offloading turned out to be the practical solution.
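For reference, the lowvram offloading in question is ComfyUI's standard launch option rather than a workflow node; a minimal sketch, assuming a default ComfyUI checkout started from its repository root:

```shell
# Launch ComfyUI with aggressive weight offloading to system RAM.
# --lowvram trades generation speed for fitting large models
# (e.g. a 14B checkpoint) into an 8 GB card like the RTX 4060.
python main.py --lowvram
```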
Running LTX-2 and Wan 2.2 on an M1 Max (64 GB). FP8 is unsupported on Metal, so GGUF quantized weights were used instead. Wan 2.2 takes 82 minutes for a 2-second video; LTX-2's official pipeline produces NaN on MPS, and the KSampler fallback doesn't reach usable quality.
Right after Seedance 2.0 launched, a flood of clips infringing Hollywood IP swept social networks. Disney, Netflix, and Paramount sent cease-and-desist letters; the API release was postponed indefinitely, and the face-cloning and person-reference features were disabled.
As of February 2026, the Seedance 2.0 API is still not public. This article summarizes the outlook for ComfyUI integration once the API is released and the preparations worth making in the meantime.
ByteDance's Seedance 2.0 has been released on Dreamina. From the perspective of someone running Wan 2.x locally in ComfyUI, I examined how "ease of use" differs between local setups and cloud-based video generation services.
This article rounds up the major video-generation AI updates announced in January 2026 and examines whether i2v (image-to-video) is practically usable yet, including for models that run locally.